Dance Style Classification using Laban-Inspired and Frequency-Domain Motion Features
Hamscher, Ben, Brosch, Arnold, Binninger, Nicolas, Dejna, Maksymilian Jan, Maag, Kira
Dance is an essential component of human culture and serves as a tool for conveying emotions and telling stories. Identifying and distinguishing dance genres based on motion data is a complex problem in human activity recognition, as many styles share similar poses, gestures, and temporal motion patterns. This work presents a lightweight framework for classifying dance styles that determines motion characteristics based on pose estimates extracted from videos. We propose temporal-spatial descriptors inspired by Laban Movement Analysis. These features capture local joint dynamics such as velocity, acceleration, and angular movement of the upper body, enabling a structured representation of spatial coordination. To further encode rhythmic and periodic aspects of movement, we integrate Fast Fourier Transform features that characterize movement patterns in the frequency domain. The proposed approach achieves robust classification of different dance styles with low computational effort, as complex model architectures are not required, and shows that interpretable motion representations can effectively capture stylistic nuances.
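The descriptor pipeline in this abstract (joint velocity and acceleration plus FFT magnitudes of the motion signal) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `motion_features`, the pose array layout `(frames, joints, xy)`, and the choice of summary statistics and number of frequency bins are all assumptions.

```python
import numpy as np

def motion_features(poses, fps=30.0, n_freq=8):
    """Hedged sketch: velocity, acceleration, and frequency-domain
    features from a pose sequence of shape (T, J, 2) = (frames, joints, xy)."""
    vel = np.diff(poses, axis=0) * fps             # (T-1, J, 2) joint velocities
    acc = np.diff(vel, axis=0) * fps               # (T-2, J, 2) joint accelerations
    # Rhythmic descriptor: magnitude spectrum of each joint's speed signal.
    speed = np.linalg.norm(vel, axis=-1)           # (T-1, J) per-joint speed
    spectrum = np.abs(np.fft.rfft(speed, axis=0))  # (F, J) FFT magnitudes
    fft_feat = spectrum[1:1 + n_freq].mean(axis=1) # first n_freq non-DC bins, joint-averaged
    stats = np.array([vel.mean(), vel.std(), acc.mean(), acc.std()])
    return np.concatenate([stats, fft_feat])       # compact feature vector
```

A lightweight classifier (e.g. a random forest or linear SVM) over such vectors would match the low-computational-effort setting the abstract describes.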
Dance Any Beat: Blending Beats with Visuals in Dance Video Generation
Wang, Xuanchen, Wang, Heng, Liu, Dongnan, Cai, Weidong
Automated choreography advances by generating dance from music. Current methods create skeleton keypoint sequences, not full dance videos, and cannot make specific individuals dance, limiting their real-world use. These methods also need precise keypoint annotations, making data collection difficult and restricting the use of self-made video datasets. To overcome these challenges, we introduce a novel task: generating dance videos directly from images of individuals guided by music. This task enables the dance generation of specific individuals without requiring keypoint annotations, making it more versatile and applicable to various situations. Our solution, the Dance Any Beat Diffusion model (DabFusion), utilizes a reference image and a music piece to generate dance videos featuring various dance types and choreographies. The music is analyzed by our specially designed music encoder, which identifies essential features including dance style, movement, and rhythm. DabFusion excels in generating dance videos not only for individuals in the training dataset but also for any previously unseen person. This versatility stems from its approach of generating latent optical flow, which contains all necessary motion information to animate any person in the image. We evaluate DabFusion's performance using the AIST++ dataset, focusing on video quality, audio-video synchronization, and motion-music alignment. We propose a 2D Motion-Music Alignment Score (2D-MM Align), which builds on the Beat Alignment Score to more effectively evaluate motion-music alignment for this new task. Experiments show that our DabFusion establishes a solid baseline for this innovative task. Video results can be found on our project page: https://DabFusion.github.io.
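The Beat Alignment Score that 2D-MM Align builds on can be sketched roughly as follows: for each music beat, find the nearest kinematic (motion) beat and average a Gaussian of the time gap. This is an illustrative sketch only; the function name, the symmetric treatment of beat lists, and the tolerance parameter `sigma` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def beat_alignment_score(music_beats, motion_beats, sigma=0.1):
    """Sketch of a beat-alignment metric: average, over music beats,
    of exp(-d^2 / (2*sigma^2)) where d is the gap (in seconds) to the
    nearest motion beat. Returns a score in [0, 1], 1 = perfect alignment."""
    music_beats = np.asarray(music_beats, dtype=float)
    motion_beats = np.asarray(motion_beats, dtype=float)
    if music_beats.size == 0 or motion_beats.size == 0:
        return 0.0
    # Distance from each music beat to its closest motion beat.
    dists = np.abs(music_beats[:, None] - motion_beats[None, :]).min(axis=1)
    return float(np.exp(-dists**2 / (2 * sigma**2)).mean())
```

Perfectly coincident beat lists score 1.0, and the score decays smoothly as motion beats drift away from the musical pulse.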
Dance Style Transfer with Cross-modal Transformer
Yin, Wenjie, Yin, Hang, Baraka, Kim, Kragic, Danica, Björkman, Mårten
We present CycleDance, a dance style transfer system to transform an existing motion clip in one dance style to a motion clip in another dance style while attempting to preserve motion context of the dance. Our method extends an existing CycleGAN architecture for modeling audio sequences and integrates multimodal transformer encoders to account for music context. We adopt sequence length-based curriculum learning to stabilize training. Our approach captures rich and long-term intra-relations between motion frames, which is a common challenge in motion transfer and synthesis work. We further introduce new metrics for gauging transfer strength and content preservation in the context of dance movements. We perform an extensive ablation study as well as a human study including 30 participants with 5 or more years of dance experience. The results demonstrate that CycleDance generates realistic movements with the target style, significantly outperforming the baseline CycleGAN on naturalness, transfer strength, and content preservation.
Algorithm can identify a person by looking at their dance style
In other words, the research from the University of Jyväskylä indicates that the way you dance is unique: from the subtle differences between dance patterns, algorithms can tell it is you rather than someone else. The objective of the research was to apply machine learning to understand how and why music affects people the way it does. To explore this question, the Finnish scientists used motion capture technology (much like the technology now common in films with CGI elements) to gain insight into the uniqueness of dance moves and to extrapolate what a person's dance moves might say about them. From studying different patterns of dancing, the researchers are of the view that they can determine how extroverted or neurotic a person is and also draw insights into the particular mood a person is experiencing. The recent study used seventy-three people, who were motion-captured while dancing to eight different genres of music: Blues, Country, Dance/Electronica, Jazz, Metal, Pop, Reggae and Rap.